A generalized Frank–Wolfe method with “dual averaging” for strongly convex composite optimization


Abstract

We propose a simple variant of the generalized Frank–Wolfe method for solving strongly convex composite optimization problems, by introducing an additional averaging step on the dual variables. We show that in this variant one can choose a constant step-size and obtain a linear convergence rate on the duality gaps. Leveraging this analysis, we then analyze the local convergence of the logistic fictitious play algorithm, which is well established in game theory but lacks any form of convergence guarantees. We show that, with high probability, this algorithm converges locally at rate O(1/t), in terms of a certain expected duality gap.
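As a rough illustration of the two ingredients the abstract names, here is a toy sketch (not the paper's exact algorithm): a generalized Frank–Wolfe iteration with a constant step size, where the linear minimization oracle is driven by a running average of past gradients (the "dual averaging" step). The problem data, the simplex domain, and all constants are illustrative assumptions.

```python
import numpy as np

# Toy sketch only -- not the paper's exact method. A generalized Frank-Wolfe
# step with a CONSTANT step size, where the LMO acts on a running average of
# past gradients (the averaged dual variable). All data are made up.

rng = np.random.default_rng(0)
n, d = 20, 5
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
mu = 1.0  # strong-convexity parameter of f

def grad_f(x):
    # f(x) = 0.5*||A x - b||^2 + 0.5*mu*||x||^2  (strongly convex, smooth)
    return A.T @ (A @ x - b) + mu * x

def lmo_simplex(c):
    # argmin of <c, v> over the probability simplex: vertex at min coordinate
    v = np.zeros_like(c)
    v[np.argmin(c)] = 1.0
    return v

x = np.ones(d) / d      # start at the simplex barycenter
g_avg = np.zeros(d)     # averaged dual variable
gamma = 0.05            # constant step size (assumed; would be tuned in practice)

for t in range(1, 2001):
    g_avg += (grad_f(x) - g_avg) / t      # dual averaging of gradients
    v = lmo_simplex(g_avg)                # LMO on the averaged dual variable
    x = (1 - gamma) * x + gamma * v       # Frank-Wolfe update

# Frank-Wolfe (duality) gap at the final iterate, using the current gradient
gap = grad_f(x) @ (x - lmo_simplex(grad_f(x)))
```

The gap is nonnegative by construction, since the LMO vertex minimizes the linearization over the feasible set.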


Similar articles

A second order primal-dual method for nonsmooth convex composite optimization

We develop a second order primal-dual method for optimization problems in which the objective function is given by the sum of a strongly convex twice differentiable term and a possibly nondifferentiable convex regularizer. After introducing an auxiliary variable, we utilize the proximal operator of the nonsmooth regularizer to transform the associated augmented Lagrangian into a function that i...


A Primal-Dual Splitting Method for Convex Optimization Involving Lipschitzian, Proximable and Linear Composite Terms

We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach, in the sense that the gradient and the linear operators involved are applied explici...
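A hedged sketch of a primal–dual splitting of this family (a Condat–Vũ-style iteration, not necessarily the cited paper's exact scheme): the smooth gradient and the linear operator are applied explicitly, while only the simple nonsmooth parts are handled through proximal maps. The toy instance (1-D total-variation denoising) and all constants are assumptions.

```python
import numpy as np

# Sketch of a full-splitting primal-dual iteration for
#   min_x f(x) + g(x) + h(L x)
# with f smooth (gradient applied explicitly), g proximable (here g = 0), and
# a linear composite term h(L x). Toy instance: 1-D total-variation denoising
# with f(x) = 0.5*||x - b||^2, h = lam*||.||_1, L the finite-difference operator.

rng = np.random.default_rng(2)
n, lam = 50, 0.5
b = np.concatenate([np.zeros(25), np.ones(25)]) + 0.1 * rng.standard_normal(n)
L = np.diff(np.eye(n), axis=0)            # (n-1) x n difference operator

L_f = 1.0                                 # Lipschitz constant of grad f
sigma = 0.5
tau = 0.9 / (L_f / 2 + sigma * np.linalg.norm(L, 2) ** 2)  # step-size condition

x, y = np.zeros(n), np.zeros(n - 1)
for _ in range(500):
    x_new = x - tau * ((x - b) + L.T @ y)     # gradient + adjoint step; prox of g = 0 is the identity
    y = np.clip(y + sigma * L @ (2 * x_new - x), -lam, lam)  # prox of h* = projection onto the box
    x = x_new
```

Note that the prox of the conjugate of lam*||.||_1 is simply a box projection, which is what makes the dual update a one-line clip.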


A Gauss–Newton method for convex composite optimization

An extension of the Gauss–Newton method for nonlinear equations to convex composite optimization is described and analyzed. Local quadratic convergence is established for the minimization of h ∘ F under two conditions, namely h has a set of weak sharp minima, C, and there is a regular point of the inclusion F(x) ∈ C. This result extends a similar convergence result due to Womersley (this journa...
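The scheme described above linearizes F at the current point and minimizes the convex model h(F(x) + J(x) d) over the step d. The sketch below takes the classical special case h = 0.5*||.||^2 (ordinary nonlinear least squares) rather than the weak-sharp-minima setting of the abstract; the residual F is a made-up toy example.

```python
import numpy as np

# Gauss-Newton idea for convex composite minimization min_x h(F(x)),
# specialized to h = 0.5*||.||^2 (nonlinear least squares) for illustration.
# Toy residual with a root at (1, 1).

def F(x):
    return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])

def J(x):
    # Jacobian of F
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

x = np.array([2.0, 0.5])
for _ in range(20):
    # d minimizes ||F(x) + J(x) d||^2, the linearized convex model
    d = np.linalg.lstsq(J(x), -F(x), rcond=None)[0]
    x = x + d
```

Near the solution the residual vanishes, so the iteration exhibits the fast local convergence the abstract refers to.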



Dual Averaging Method for Regularized Stochastic Learning and Online Optimization

We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, that can explicitly exploit the regularizatio...
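A minimal sketch of the ℓ1-regularized dual averaging (RDA) update on a toy streaming least-squares task; the dimensions, constants, and data are assumptions for illustration. Each step averages all subgradients seen so far and then minimizes <g_bar, x> + lam*||x||_1 + (gamma/sqrt(t)) * 0.5*||x||^2, whose closed form is entrywise soft-thresholding — which is how the method exploits the regularizer to produce exact zeros online.

```python
import numpy as np

# Sketch of the l1-regularized dual averaging (RDA) update (constants and
# data are illustrative assumptions, not from the cited paper).

rng = np.random.default_rng(1)
d = 10
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.5, 1.0]             # sparse ground truth

lam, gamma = 0.1, 5.0
x, g_bar = np.zeros(d), np.zeros(d)

for t in range(1, 501):
    a = rng.standard_normal(d)            # one streaming feature vector
    y = a @ x_true + 0.01 * rng.standard_normal()
    g = (a @ x - y) * a                   # gradient of the squared loss at x
    g_bar += (g - g_bar) / t              # running average of subgradients
    # RDA closed form: exact zeros where |g_bar| <= lam, shrinkage elsewhere
    x = np.where(np.abs(g_bar) <= lam, 0.0,
                 -(np.sqrt(t) / gamma) * (g_bar - lam * np.sign(g_bar)))
```

Unlike subgradient methods, the thresholding acts on the averaged gradient, so coordinates with small average signal are set exactly to zero rather than merely shrunk toward it.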



Journal

Journal title: Optimization Letters

Year: 2022

ISSN: 1862-4480, 1862-4472

DOI: https://doi.org/10.1007/s11590-022-01951-0